California advances measures targeting AI discrimination and deepfakes
SACRAMENTO, Calif. (AP) — As corporations increasingly weave artificial intelligence technologies into the daily lives of Americans, California lawmakers want to build public trust, fight algorithmic discrimination and outlaw deepfakes that involve elections or pornography.
The efforts in California — home to many of the world’s biggest AI companies — could pave the way for AI regulations across the country. The United States is already behind Europe in regulating AI to limit risks, lawmakers and experts say, and the rapidly growing technology is raising concerns about job loss, misinformation, invasions of privacy and automation bias.
A slew of proposals aimed at addressing those concerns advanced last week, but must win the other chamber’s approval before arriving at Gov. Gavin Newsom’s desk. The Democratic governor has promoted California as an early adopter as well as regulator, saying the state could soon deploy generative AI tools to address highway congestion, make roads safer and provide tax guidance, even as his administration considers new rules against AI discrimination in hiring practices.
With strong privacy laws already in place, California is in a better position to enact impactful regulations than other states with large AI interests, such as New York, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.
“You need a data privacy law to be able to pass an AI law,” Rice said. “We’re still kind of paying attention to what New York is doing, but I would put more bets on California.”
California lawmakers said they cannot wait to act, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance. But they also want to continue attracting AI companies to the state.
Here’s a closer look at California’s proposals:
FIGHTING AI DISCRIMINATION AND BUILDING PUBLIC TRUST
Some companies, including hospitals, already use AI models to inform decisions about hiring, housing and medical options for millions of Americans without much oversight. Up to 83% of employers use AI to help in hiring, according to the U.S. Equal Employment Opportunity Commission. How those algorithms work largely remains a mystery.
One of the most ambitious AI measures in California this year would pull back the curtain on these models by establishing an oversight framework to prevent bias and discrimination. It would require companies whose AI tools play a role in such consequential decisions to inform the people affected when AI is used. AI developers would have to routinely conduct internal assessments of their models for bias. And the state attorney general would have authority to investigate reports of discriminatory models and impose fines of $10,000 per violation.
AI companies also might soon be required to start disclosing what data they’re using to train their models.
PROTECTING JOBS AND LIKENESS
Inspired by the months-long Hollywood actors strike last year, a California lawmaker wants to protect workers from being replaced by their AI-generated clones — a major point of contention in contract negotiations.
The proposal, backed by the California Labor Federation, would let performers back out of existing contracts if vague language might allow studios to freely use AI to digitally clone their voices and likenesses. It would also require that performers be represented by an attorney or union representative when signing new “voice and likeness” contracts.
California may also create penalties for digitally cloning dead people without the consent of their estate, citing the case of a media company that produced a fake, AI-generated hourlong comedy special to recreate the late comedian George Carlin’s style and material without his estate’s permission.
REGULATING POWERFUL GENERATIVE AI SYSTEMS
Real-world risks abound as generative AI creates new content such as text, audio and photos in response to prompts. So lawmakers are considering requiring guardrails around “extremely large” AI systems that have the potential to spit out instructions for creating disasters — such as building chemical weapons or assisting in cyberattacks — that could cause at least $500 million in damages. It would require such models to have a built-in “kill switch,” among other things.
The measure, supported by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices, including for still more powerful models that don’t yet exist. The state attorney general also would be able to pursue legal action in cases of violations.
BANNING DEEPFAKES INVOLVING POLITICS OR PORNOGRAPHY
A bipartisan coalition seeks to make it easier to prosecute people who use AI tools to create images of child sexual abuse. Current law does not allow district attorneys to go after people who possess or distribute AI-generated child sexual abuse images if the materials do not depict a real person, law enforcement officials said.
A host of Democratic lawmakers are also backing a bill tackling election deepfakes, citing concerns after AI-generated robocalls mimicked President Joe Biden’s voice ahead of New Hampshire’s recent presidential primary. The proposal would ban “materially deceptive” deepfakes related to elections in political mailers, robocalls and TV ads 120 days before Election Day and 60 days thereafter. Another proposal would require social media platforms to label any election-related posts created by AI.